feat: RSPEED-2538 add optional verbose metadata to /v1/infer endpoint#1305

Merged
tisnik merged 5 commits into lightspeed-core:main from Lifto:feat/verbose-infer-metadata
Mar 18, 2026

Conversation


@Lifto Lifto commented Mar 10, 2026

Description

Add development/testing feature to return extended debugging metadata from /v1/infer endpoint, similar to /v1/query. Requires dual opt-in: config flag (allow_verbose_infer) and request parameter (include_metadata).
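A server configuration enabling the flag might look like the fragment below. Only the `allow_verbose_infer` name comes from this PR; the `customization` nesting is an assumption based on the Customization model the flag is added to:

```yaml
# Hypothetical config fragment; only allow_verbose_infer is defined by this PR.
customization:
  allow_verbose_infer: true   # server-side opt-in, defaults to false
```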

When enabled, returns:

  • tool_calls — MCP tool calls made during inference
  • tool_results — Results from MCP tool calls
  • rag_chunks — RAG chunks retrieved from documentation
  • referenced_documents — Source documents referenced
  • input_tokens / output_tokens — Token usage

Maintains backwards compatibility by excluding null fields from standard responses (via response_model_exclude_none=True).
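The null-field exclusion can be demonstrated with a minimal Pydantic sketch (assuming Pydantic v2; the class below is a simplified stand-in for the real RlsapiV1InferData, not its actual definition):

```python
from typing import Optional
from pydantic import BaseModel

class InferData(BaseModel):
    # Simplified stand-in; the real model lives in src/models/rlsapi/responses.py.
    text: str
    request_id: str
    input_tokens: Optional[int] = None
    output_tokens: Optional[int] = None

# FastAPI's response_model_exclude_none=True serializes like model_dump(exclude_none=True):
standard = InferData(text="hi", request_id="r-1")
print(sorted(standard.model_dump(exclude_none=True)))  # → ['request_id', 'text']

verbose = InferData(text="hi", request_id="r-1", input_tokens=42, output_tokens=18)
print(sorted(verbose.model_dump(exclude_none=True)))
# → ['input_tokens', 'output_tokens', 'request_id', 'text']
```

Existing clients see exactly the old two-field payload, which is what keeps the change backwards compatible.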

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement
  • Benchmarks improvement

Tools used to create PR

Identify any AI code assistants used in this PR (for transparency and review context)

  • Assisted-by: Claude
  • Generated by: N/A

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Standard request: Call /v1/infer without include_metadata (or with config allow_verbose_infer: false). Response contains only text and request_id; no metadata fields.
  • Verbose disabled: Request with include_metadata: true but config allow_verbose_infer: false returns standard response only.
  • Verbose enabled: With both allow_verbose_infer: true in config and include_metadata: true in the request, response includes tool_calls, tool_results, rag_chunks, referenced_documents, input_tokens, output_tokens.
  • Unit tests: test_infer_minimal_request asserts standard response has only text/request_id and no metadata; test_infer_include_metadata_returns_verbose_response asserts verbose path returns all metadata fields and token counts. Run: uv run pytest tests/unit/app/endpoints/test_rlsapi_v1.py -v -k "infer_minimal or infer_include_metadata".
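The scenarios above reduce to a payload-shape contract; a sketch follows, with field names taken from the PR description and all values invented for illustration:

```python
# Expected serialized payload shapes (values are invented examples).
standard_payload = {"text": "Use `ls -la`.", "request_id": "req-123"}
verbose_payload = {
    "text": "Use `ls -la`.",
    "request_id": "req-123",
    "tool_calls": [],
    "tool_results": [],
    "rag_chunks": [],
    "referenced_documents": [],
    "input_tokens": 42,
    "output_tokens": 18,
}

# The standard response is a strict subset of the verbose one, so clients
# that only read text/request_id are unaffected by either mode.
assert set(standard_payload) < set(verbose_payload)
```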

Summary by CodeRabbit

  • New Features

    • Optional verbose metadata mode for inference responses: when enabled, responses can include tool calls, tool results, retrieved RAG chunks, referenced documents, and token usage.
    • Requests can opt in to include metadata; availability is gated by a configuration flag.
    • Responses now omit null fields from the returned payload.
  • Tests

    • Added unit tests covering verbose-metadata on/off behavior and preservation of minimal responses.

Add development/testing feature to return extended debugging metadata
from /v1/infer endpoint, similar to /v1/query. Requires dual opt-in:
config flag (allow_verbose_infer) and request parameter (include_metadata).

When enabled, returns tool_calls, tool_results, rag_chunks,
referenced_documents, and token counts. Maintains backwards compatibility
by excluding null fields from standard responses.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

coderabbitai bot commented Mar 10, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds an opt-in verbose metadata mode to the /infer endpoint: a new config flag and request field gate returning full LLM response metadata (tool calls/results, RAG chunks, referenced documents, token usage, turn summary) alongside standard text/request_id; route now uses response_model_exclude_none=True.

Changes

  • Configuration (src/models/config.py): Added allow_verbose_infer: bool = False to Customization to gate verbose inference.
  • API request/response models (src/models/rlsapi/requests.py, src/models/rlsapi/responses.py): Added include_metadata: bool to RlsapiV1InferRequest; extended RlsapiV1InferData with optional tool_calls, tool_results, rag_chunks, referenced_documents, input_tokens, and output_tokens; updated imports for the new model types.
  • Endpoint logic (src/app/endpoints/rlsapi_v1.py): Added a conditional verbose path: when config.allow_verbose_infer and request.include_metadata are both true, fetch the full LLM response via LlamaStack, extract output, usage, and tool/RAG data, build turn_summary, and populate the verbose fields; the non-verbose path returns only text and request_id. Updated the route decorator with response_model_exclude_none=True.
  • Tests (tests/unit/app/endpoints/test_rlsapi_v1.py): Added tests ensuring verbose fields are populated when enabled and omitted (None) when disabled; updated minimal-response assertions.
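The dual gate in the endpoint logic can be sketched as follows. The flag names and verbose_enabled appear in the PR; the dataclasses below are simplified stand-ins for the real config and request models:

```python
from dataclasses import dataclass

@dataclass
class Customization:
    allow_verbose_infer: bool = False  # server-side flag from src/models/config.py

@dataclass
class InferRequest:
    question: str
    include_metadata: bool = False  # per-request flag added by this PR

def verbose_enabled(config: Customization, request: InferRequest) -> bool:
    # Both the deployment and the caller must opt in; either one alone is not enough.
    return config.allow_verbose_infer and request.include_metadata

assert verbose_enabled(Customization(True), InferRequest("q", include_metadata=True))
assert not verbose_enabled(Customization(False), InferRequest("q", include_metadata=True))
assert not verbose_enabled(Customization(True), InferRequest("q", include_metadata=False))
```

Keeping the gate a single boolean expression is what lets the tests below pin the contract with three cases.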

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client
  participant API as API (/infer)
  participant Llama as LlamaStack
  participant Builder as Response Builder

  Client->>API: POST /infer (include_metadata=true?)
  API-->>API: evaluate config.allow_verbose_infer && request.include_metadata
  alt verbose enabled
    API->>Llama: request full inference (full response)
    Llama-->>API: full response (output, usage, tool_calls, rag_chunks, referenced_documents)
    API->>Builder: build_turn_summary(response.output)
    Builder-->>API: turn_summary
    API-->>Client: RlsapiV1InferResponse { text, request_id, tool_calls, tool_results, rag_chunks, referenced_documents, input_tokens, output_tokens, turn_summary }
  else verbose disabled
    API->>Llama: request simple inference (text)
    Llama-->>API: simple text
    API-->>Client: RlsapiV1InferResponse { text, request_id }
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 3 of 3 passed

  • Description check: ✅ Passed. Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title check: ✅ Passed. The title accurately describes the main change: adding optional verbose metadata to the /v1/infer endpoint, the primary focus of all file changes in this PR.
  • Docstring coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.


@Lifto Lifto marked this pull request as draft March 10, 2026 20:55

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
src/app/endpoints/rlsapi_v1.py (1)

464-483: Consider extracting shared response creation logic.

The verbose path duplicates the client.responses.create() call that exists in retrieve_simple_response(). Consider refactoring to avoid duplication.

♻️ Proposed refactor to reduce duplication
 async def retrieve_simple_response(
     question: str,
     instructions: str,
     tools: Optional[list[Any]] = None,
     model_id: Optional[str] = None,
-) -> str:
+    return_full_response: bool = False,
+) -> str | OpenAIResponseObject:
     """Retrieve a simple response from the LLM for a stateless query.
-
-    Uses the Responses API for simple stateless inference, consistent with
-    other endpoints (query, streaming_query).
-
-    Args:
-        question: The combined user input (question + context).
-        instructions: System instructions for the LLM.
-        tools: Optional list of MCP tool definitions for the LLM.
-        model_id: Fully qualified model identifier in provider/model format.
-            When omitted, the configured default model is used.
-
-    Returns:
-        The LLM-generated response text.
-
-    Raises:
-        APIConnectionError: If the Llama Stack service is unreachable.
-        HTTPException: 503 if no default model is configured.
+    ...
+    Args:
+        ...
+        return_full_response: If True, return the full OpenAIResponseObject.
+
+    Returns:
+        The LLM-generated response text, or full response object if requested.
     """
     client = AsyncLlamaStackClientHolder().get_client()
     resolved_model_id = model_id or await _get_default_model_id()
     logger.debug("Using model %s for rlsapi v1 inference", resolved_model_id)

     response = await client.responses.create(
         input=question,
         model=resolved_model_id,
         instructions=instructions,
         tools=tools or [],
         stream=False,
         store=False,
     )
     response = cast(OpenAIResponseObject, response)
-    extract_token_usage(response.usage, resolved_model_id)
 
-    return extract_text_from_response_items(response.output)
+    if return_full_response:
+        return response
+
+    extract_token_usage(response.usage, resolved_model_id)
+    return extract_text_from_response_items(response.output)

Then in infer_endpoint:

-        if verbose_enabled:
-            client = AsyncLlamaStackClientHolder().get_client()
-            response = await client.responses.create(
-                input=input_source,
-                model=model_id,
-                instructions=instructions,
-                tools=mcp_tools or [],
-                stream=False,
-                store=False,
-            )
-            response = cast(OpenAIResponseObject, response)
-            response_text = extract_text_from_response_items(response.output)
-        else:
-            response = None
-            response_text = await retrieve_simple_response(...)
+        if verbose_enabled:
+            response = await retrieve_simple_response(
+                input_source, instructions, tools=mcp_tools,
+                model_id=model_id, return_full_response=True
+            )
+            response = cast(OpenAIResponseObject, response)
+            response_text = extract_text_from_response_items(response.output)
+        else:
+            response = None
+            response_text = await retrieve_simple_response(
+                input_source, instructions, tools=mcp_tools, model_id=model_id
+            )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/endpoints/rlsapi_v1.py` around lines 464 - 483, The verbose and
non-verbose branches duplicate the client.responses.create call; refactor by
extracting the shared response-creation logic into a single helper (for example
a new function used by infer_endpoint and retrieve_simple_response) that calls
AsyncLlamaStackClientHolder().get_client().responses.create with parameters
(input_source, model_id, instructions, tools, stream, store) and returns the
response object/text; then have infer_endpoint call that helper (or have
retrieve_simple_response delegate to it) and remove the duplicated
client.responses.create usage in the verbose branch so both paths share the same
implementation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 534d060a-0ac8-4214-8bda-9ec1b42ff84d

📥 Commits

Reviewing files that changed from the base of the PR and between de8a85a and 4c6b955.

📒 Files selected for processing (4)
  • src/app/endpoints/rlsapi_v1.py
  • src/models/config.py
  • src/models/rlsapi/requests.py
  • src/models/rlsapi/responses.py

@Lifto Lifto marked this pull request as ready for review March 11, 2026 15:55

major commented Mar 17, 2026

@Lifto This looks okay for something temporary, but it sure would be nice if you added some tests. 😉

@Lifto Lifto marked this pull request as draft March 17, 2026 19:20

Lifto commented Mar 17, 2026

@Lifto This looks okay for something temporary, but it sure would be nice if you added some tests. 😉

tests added, PR uses template, CI passes

@Lifto Lifto marked this pull request as ready for review March 17, 2026 20:46

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
tests/unit/app/endpoints/test_rlsapi_v1.py (1)

650-702: Add the missing dual-opt-in negative-path test.

This covers the happy path well, but it should also verify that metadata is not returned when include_metadata=True and allow_verbose_infer=False (to lock in the dual-gate contract).

Suggested test addition
+async def test_infer_include_metadata_ignored_when_verbose_infer_disabled(
+    mocker: MockerFixture,
+    mock_configuration: AppConfig,
+    mock_auth_resolvers: None,
+) -> None:
+    """Metadata should remain excluded unless both request and config opt in."""
+    custom_mock = mocker.Mock()
+    custom_mock.allow_verbose_infer = False
+    custom_mock.system_prompt = "You are a helpful assistant."
+    config_mock = mocker.Mock()
+    config_mock.inference = mock_configuration.inference
+    config_mock.customization = custom_mock
+    mocker.patch("app.endpoints.rlsapi_v1.configuration", config_mock)
+
+    mock_response = mocker.Mock()
+    mock_response.output = [
+        _create_mock_response_output(mocker, "Response with metadata disabled.")
+    ]
+    mock_usage = mocker.Mock()
+    mock_usage.input_tokens = 99
+    mock_usage.output_tokens = 11
+    mock_response.usage = mock_usage
+    _setup_responses_mock(mocker, mocker.AsyncMock(return_value=mock_response))
+
+    infer_request = RlsapiV1InferRequest(
+        question="How do I list files?", include_metadata=True
+    )
+    mock_request = _create_mock_request(mocker)
+    mock_background_tasks = _create_mock_background_tasks(mocker)
+
+    response = await infer_endpoint(
+        infer_request=infer_request,
+        request=mock_request,
+        background_tasks=mock_background_tasks,
+        auth=MOCK_AUTH,
+    )
+
+    assert response.data.tool_calls is None
+    assert response.data.tool_results is None
+    assert response.data.rag_chunks is None
+    assert response.data.referenced_documents is None
+    assert response.data.input_tokens is None
+    assert response.data.output_tokens is None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/app/endpoints/test_rlsapi_v1.py` around lines 650 - 702, Add a new
unit test mirroring test_infer_include_metadata_returns_verbose_response but
with the module-level configuration's customization.allow_verbose_infer set to
False; construct an RlsapiV1InferRequest(include_metadata=True), call
infer_endpoint (same mocks as the existing test), and assert the returned
RlsapiV1InferResponse does NOT include metadata fields (tool_calls,
tool_results, rag_chunks, referenced_documents) and that token fields are absent
or zero as appropriate, ensuring the dual-opt-in behavior enforced by
configuration.customization.allow_verbose_infer is validated.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: be568bdc-eb2b-4252-84ae-52ec181b944d

📥 Commits

Reviewing files that changed from the base of the PR and between 4c6b955 and 8f3f247.

📒 Files selected for processing (1)
  • tests/unit/app/endpoints/test_rlsapi_v1.py

@Lifto Lifto changed the title feat: add optional verbose metadata to /v1/infer endpoint feat: RSPEED-2538 add optional verbose metadata to /v1/infer endpoint Mar 18, 2026

@tisnik tisnik left a comment


LGTM, thank you


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9333c776-7f08-481b-9073-1915a75af231

📥 Commits

Reviewing files that changed from the base of the PR and between 4c82847 and e9b6df8.

📒 Files selected for processing (5)
  • src/app/endpoints/rlsapi_v1.py
  • src/models/config.py
  • src/models/rlsapi/requests.py
  • src/models/rlsapi/responses.py
  • tests/unit/app/endpoints/test_rlsapi_v1.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/models/config.py
  • src/models/rlsapi/requests.py

Comment on lines +462 to +475

        # For verbose mode, retrieve the full response object instead of just text
        if verbose_enabled:
            client = AsyncLlamaStackClientHolder().get_client()
            response = await client.responses.create(
                input=input_source,
                model=model_id,
                instructions=instructions,
                tools=mcp_tools or [],
                stream=False,
                store=False,
            )
            response = cast(OpenAIResponseObject, response)
            response_text = extract_text_from_response_items(response.output)


⚠️ Potential issue | 🟡 Minor

Verbose path may skip token usage metrics recording.

The non-verbose path calls retrieve_simple_response(), which invokes extract_token_usage() to record token usage metrics (via _increment_llm_call_metric()). The verbose path directly calls client.responses.create() but does not call extract_token_usage(), so token usage metrics may not be recorded for verbose requests.

Consider calling extract_token_usage(response.usage, model_id) in the verbose path to ensure consistent metrics tracking:

Proposed fix
             response = cast(OpenAIResponseObject, response)
             response_text = extract_text_from_response_items(response.output)
+            extract_token_usage(response.usage, model_id)
         else:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/endpoints/rlsapi_v1.py` around lines 462 - 475, Verbose branch calls
AsyncLlamaStackClientHolder().get_client().responses.create(...) and uses
extract_text_from_response_items(response.output) but does not record token
metrics; update the verbose path to call extract_token_usage(response.usage,
model_id) after casting the response (same place as
extract_text_from_response_items) so metrics recorded by
_increment_llm_call_metric are executed (mirror what retrieve_simple_response
does) ensuring consistent token usage tracking for verbose requests.


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
tests/unit/app/endpoints/test_rlsapi_v1.py (2)

656-708: Add a serialized-shape assertion for the verbose path.

You already validate non-None fields; consider also asserting the serialized data keys to pin the public payload contract end-to-end.

Suggested test hardening
     assert response.data.input_tokens == 42
     assert response.data.output_tokens == 18
+    data_keys = set(response.model_dump(exclude_none=True)["data"].keys())
+    assert data_keys == {
+        "text",
+        "request_id",
+        "tool_calls",
+        "tool_results",
+        "rag_chunks",
+        "referenced_documents",
+        "input_tokens",
+        "output_tokens",
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/app/endpoints/test_rlsapi_v1.py` around lines 656 - 708, Add a
serialized-shape assertion in
test_infer_include_metadata_returns_verbose_response to lock the public payload
keys by converting the returned response data to its dict/serialized form (e.g.,
via response.data.dict() or response.dict()["data"]) and asserting the expected
set of keys is present; specifically ensure keys like "text", "request_id",
"tool_calls", "tool_results", "rag_chunks", "referenced_documents",
"input_tokens", and "output_tokens" are included alongside any other mandatory
keys your API returns so the verbose path contract is asserted end-to-end.

710-753: Also assert key omission when include_metadata=True but config disables verbose.

This branch is the security/compatibility edge of dual opt-in; adding a serialized-key check ensures metadata is truly excluded in output, not just None in the model instance.

Suggested assertion to lock response shape
     assert response.data.input_tokens is None
     assert response.data.output_tokens is None
+    data_keys = set(response.model_dump(exclude_none=True)["data"].keys())
+    assert data_keys == {
+        "text",
+        "request_id",
+    }, f"Expected only text and request_id, got {data_keys}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/app/endpoints/test_rlsapi_v1.py` around lines 710 - 753, The test
test_infer_include_metadata_ignored_when_verbose_infer_disabled should also
assert that the serialized response omits metadata keys, not just that model
attributes are None; after calling infer_endpoint, serialize the response (e.g.,
response.dict() or response.json() into a dict) and assert that keys
["tool_calls","tool_results","rag_chunks","referenced_documents","input_tokens","output_tokens"]
are not present under the "data" object; update this test to perform that
serialized-key absence check so infer_endpoint / RlsapiV1InferRequest behavior
is enforced end-to-end.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 274d33e0-9b77-475a-a383-71c22b97fe37

📥 Commits

Reviewing files that changed from the base of the PR and between e9b6df8 and c111131.

📒 Files selected for processing (1)
  • tests/unit/app/endpoints/test_rlsapi_v1.py

@tisnik tisnik merged commit 8eb0b40 into lightspeed-core:main Mar 18, 2026
21 of 22 checks passed
@Lifto Lifto deleted the feat/verbose-infer-metadata branch March 18, 2026 20:44